
backlog: Otto-180 Server Meshing + SpacetimeDB deep research — game-industry competitive angle #352

Merged
AceHack merged 3 commits into main from
backlog/otto-180-server-meshing-spacetime-db-research
Apr 24, 2026
Conversation

AceHack (Member) commented Apr 24, 2026

Summary

Aaron Otto-180: "also backlog server mesh from star citizen, our db backend when we shard it should support this style of cross shard communication like server mesh, it's amazing actually, i think space time db is similar too or not it might be orthogonal but we want to support these use cases in our backend too. do deep reserach here, this could get us lots of customers in the game industruy if we can compete with server mess/space time db".

Explicit backlog directive overrides Otto-171 freeze-state queue discipline.

Two architectures to research (likely orthogonal, Aaron's intuition is right)

  • Server Meshing (CIG / Star Citizen) — horizontal-scaling across game servers; entity handoff + state propagation across meshed-server boundaries.
  • SpacetimeDB (Clockwork Labs, Apache-2) — DB + server unified; "the database IS the server" pitch; 1000x cheaper + faster claim vs traditional MMO backend.
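The orthogonality is easiest to see in where authority lives. A purely illustrative sketch (every name below is hypothetical, not CIG's or Clockwork Labs' actual API): meshing hands an entity off between peer game servers, while the SpacetimeDB model runs the game logic inside the database as a reducer.

```python
# Purely illustrative: hypothetical names, not CIG's or Clockwork Labs' APIs.

# Server Meshing style: many game servers; authority for an entity
# is handed off at a meshed-server boundary.
class Shard:
    def __init__(self, name):
        self.name = name
        self.entities = {}

def handoff(entity_id, src, dst):
    """Transfer authority for one entity from shard src to shard dst."""
    dst.entities[entity_id] = src.entities.pop(entity_id)

# SpacetimeDB style: "the database IS the server"; game logic runs
# inside the database as a reducer (a stored-procedure-like function).
class Database:
    def __init__(self):
        self.tables = {"position": {}}

    def call_reducer(self, fn, *args):
        # The real system applies this transactionally; elided here.
        fn(self.tables, *args)

def move_player(tables, player_id, pos):
    tables["position"][player_id] = pos

zone_a, zone_b = Shard("zone-a"), Shard("zone-b")
zone_a.entities["ship-1"] = {"pos": (0, 0)}
handoff("ship-1", zone_a, zone_b)            # entity crosses the boundary

db = Database()
db.call_reducer(move_player, "p1", (10, 20))  # logic runs at the data
```

Since the first pattern is about scaling across processes and the second about collapsing tiers within one, a sharded backend could in principle support both at once, which is the research question.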

Zeta differentiators identified

  • Retraction-native semantics = native rollback / lag-comp / failed-transaction recovery
  • Time-travel queries compose with replay / match-review
  • Columnar storage serves game-economy analytics
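A minimal sketch of what "retraction-native" buys for rollback, assuming a DBSP-style weighted-multiset model (the class and method names are illustrative, not Zeta's actual API): a failed transaction is undone by replaying its own delta with negated weights, with no snapshot required.

```python
from collections import defaultdict

class ZSet:
    """DBSP-style weighted multiset: positive weights insert rows,
    negative weights retract them."""
    def __init__(self):
        self.weights = defaultdict(int)

    def apply(self, delta):
        # A delta is a batch of (row, weight) pairs.
        for row, w in delta:
            self.weights[row] += w
            if self.weights[row] == 0:
                del self.weights[row]

    def contents(self):
        return dict(self.weights)

def invert(delta):
    """Rollback of a batch is the same rows with negated weights."""
    return [(row, -w) for row, w in delta]

state = ZSet()
txn = [("player:42 at (3,7)", 1), ("player:42 at (3,6)", -1)]
state.apply(txn)           # transaction lands...
state.apply(invert(txn))   # ...then fails and is undone by pure retraction
```

The same mechanism is what lag compensation and server-authoritative rewind need: undo is an ordinary incremental update rather than a special recovery path.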

Deliverable

docs/research/server-meshing-spacetimedb-comparison-zeta-sharding-fit.md with 5 sections: SM architecture / SpacetimeDB architecture / Zeta-sharding fit / competitive positioning / integration scenarios.

Customer-industry angle

3-5 named studio-types (MMO / sim / esports / mobile persistent / VR-social) with value-proposition per segment.

IP discipline

Same pattern as Otto-175c / Scientology rows: public CIG Inside-Star-Citizen + RSI blog content OK; SpacetimeDB Apache-2 code study OK; no adoption of CIG proprietary architecture docs, no positioning as CIG-adjacent / SpacetimeDB-adjacent in public branding.

Placement

P2 research-grade at line ~921 (end of "## P2 — Post-v1 query-surface research" section). NOT BACKLOG tail — 53 DIRTY siblings on tail; placing here avoids cascading another positional-append conflict.

Test plan

  • Markdownlint clean.
  • BACKLOG-only; no code changes.
  • Placement away from DIRTY-saturated tail.

🤖 Generated with Claude Code

Copilot AI review requested due to automatic review settings April 24, 2026 10:14
AceHack enabled auto-merge (squash) April 24, 2026 10:14

chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 55ce0fac69

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread docs/BACKLOG.md Outdated

Copilot AI left a comment


Pull request overview

Adds a new P2 backlog research item describing a planned deep-dive into Server Meshing (Star Citizen/CIG) and SpacetimeDB, framed as input to Zeta’s eventual multi-node/sharding design and game-industry competitive positioning.

Changes:

  • Adds a detailed P2 backlog entry covering research scope, differentiators, deliverable doc target, and IP-discipline constraints.

Comment thread docs/BACKLOG.md Outdated
Comment thread docs/BACKLOG.md Outdated
Comment thread docs/BACKLOG.md Outdated
Comment thread docs/BACKLOG.md Outdated
AceHack added a commit that referenced this pull request Apr 24, 2026
Applying PR-resolve-loop discipline to #352. 5 unresolved
review threads (Copilot P2 inline-code-span + P1 cross-ref
drift + P2 IP-discipline ambiguity + P1 inline-code-span
duplicate + P1 line-reference drift). 4 addressed in this
commit; resolution replies to follow via GraphQL.

1. Inline code span split across lines 1008-1009 (Copilot
   P2 threads 59XP74 + 59XSRN, duplicate finding): Moved
   the multi-line `docs/research/server-meshing-*.md` path
   onto its own line so the backtick-delimited span renders
   atomically per CommonMark. No more rendering risk.

2. "Otto-175c starship-franchise-mapping row" cross-ref
   that didn't resolve (Copilot P1 thread 59XSQc): Clarified
   the reference to note the row landed in PR #351 (merged).
   Amara 10th + 11th ferry cross-refs updated to point at
   their archived location under `docs/aurora/2026-04-24-
   amara-*.md` paths.

3. Wire-protocol row line-number reference was `~754`,
   actual location is `~830` (Copilot P1 thread 59XSRf):
   Corrected the line hint.

4. "No Star-Citizen trademarked content ingested" IP-
   discipline bullet was ambiguous — the row itself uses
   trademarked names for reference (Copilot P2 thread
   59XSQz): Rewrote the discipline block to explicitly
   distinguish industry-landscape reference (permitted)
   from proprietary-content ingestion (excluded), with
   specific "research-permitted" boundaries for CIG's
   public Inside-Star-Citizen + RSI content and for
   SpacetimeDB's Apache-2 repo.

Framing: this commit demonstrates the PR-resolve-loop
pattern (BACKLOG row Otto-204, PR #356) on a second PR
after the pattern was first applied to #354. Active
management vs ship-and-pray. Part of the corrective
response to Otto-204c livelock-diagnosis.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 44825a88fc


Comment thread docs/BACKLOG.md Outdated
AceHack added a commit that referenced this pull request Apr 24, 2026
PR-resolve-loop applied to #336 (KSK naming definition doc).
1 CI failure + 6 unresolved review threads.

CI fix:

- docs/definitions/KSK.md:19 MD026/no-trailing-punctuation:
  "## In this project, KSK means..." → "## In this project,
  KSK means" (dropped the three dots in the heading).

Review-thread fixes:

1. docs/GLOSSARY.md:819 — "LFG/lucent-ksk" read as in-repo
   path: Rewrote to explicitly mark as the external
   repository at https://github.com/Lucent-Financial-Group/
   lucent-ksk, clarified "not a local LFG/ directory in
   this repo."

2. docs/definitions/KSK.md:158 — named individuals in
   Attribution section violated factory name-attribution
   policy: Rewrote using role references ("the human
   maintainer", "an external AI collaborator", "a trusted
   external contributor"). Direct names preserved only in
   audit-trail surfaces per policy (commit messages,
   tick-history, session memory).

3. docs/definitions/KSK.md:153 — cross-reference to
   memory/feedback_ksk_naming_unblocked_*.md that didn't
   exist in repo: Removed path reference entirely; the
   factual substance was restated in role-based prose
   without a broken-link dependency.

4. docs/definitions/KSK.md:180 — "LFG/lucent-ksk" repeated
   same in-repo-path confusion as GLOSSARY: Applied same
   fix (external repo URL + explicit "separate repo"
   framing).

5. docs/definitions/KSK.md:207 — cross-reference list
   included `docs/aurora/*-5th-ferry-*`, `*-12th-ferry-*`,
   `*-14th-ferry-*`, `*-16th-ferry-*` globs that resolve
   to zero files in the current tree: Rewrote list to
   enumerate only verified in-repo references (6th / 7th /
   17th / 19th ferries that actually exist); added
   explicit note that earlier ferries (5th / 12th / 14th
   / 16th) live in ROUND-HISTORY + session memory rather
   than as standalone docs.

6. docs/definitions/KSK.md:191 — literal "+" continuation
   line violating markdownlint + repo convention: resolved
   as a side-effect of the Attribution rewrite — the
   replacement prose doesn't use "+" continuations.

Framing: third PR where PR-resolve-loop discipline is
applied (after #354 and #352). Active management continues.
Compound lesson from Otto-204c: prior-session review-
resolution precedents now integrated into per-tick habit,
not just sitting in memory.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
Resolves 5 P1 review threads all reporting the same class of issue:
inline-code spans (backticks) and URLs broken across newlines.

CommonMark inline-code spans cannot span newlines — the `span` is
literally broken as rendered, and readers cannot copy the path
cleanly. Same for URLs: Markdown auto-linkers stop at whitespace.

Fix pattern: move the full backticked path (or URL) onto its own
line, wrapping the surrounding prose instead. No content removed.

Threads addressed:
- 59Vtvx — OpenAI Frontier URL split across line break
- 59WtwY — `docs/research/frontier-ux-zora-evolution-*.md`
  split (first occurrence, Class-a list)
- 59Wtwq — `memory/feedback_aaron_dont_wait_on_approval_log_
  decisions_frontier_ui_is_his_review_surface_*.md` split;
  also updated to the concrete landed filename
- 59Wtw8 — same UX design-doc path split (Composition section)
- 59WtxM — `docs/definitions/KSK.md` split

Same fix pattern as PR #352 (server-meshing-*.md path).
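For illustration (with a shortened hypothetical path, not the real deliverable filename), the before/after of that fix pattern in raw Markdown. CommonMark renders a line ending inside a backtick span as a space, so a hard-wrapped path picks up a stray space and cannot be copied cleanly:

```markdown
<!-- Before: the line break inside the span renders as a space -->
Deliverable: `docs/research/server-
meshing-comparison.md` with 5 sections.

<!-- After: full backticked path on its own line; prose wraps instead -->
Deliverable:
`docs/research/server-meshing-comparison.md`
with 5 sections.
```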
AceHack and others added 3 commits April 24, 2026 08:52
…ndustry competitive angle

Aaron Otto-180: "also backlog server mesh from star citizen,
our db backend when we shard it should support this style of
cross shard communication like server mesh, it's amazing
actually, i think space time db is similar too or not it
might be orthogonal but we want to support these use cases
in our backend too. do deep reserach here, this could get us
lots of customers in the game industruy if we can compete
with server mess/space time db".

Two architectures to research (Aaron's "might be orthogonal"
intuition is correct):

1. Server Meshing (CIG / Star Citizen) — horizontal-scaling
   across many game servers; entity handoff + state
   propagation at server boundaries. Static vs Dynamic Server
   Meshing both in scope.

2. SpacetimeDB (Clockwork Labs, Apache-2) — vertical-
   integration of DB + server; reducers as stored-procedure-
   like functions; "the database IS the server" pitch;
   claims 1000x cheaper + faster than traditional MMO
   backend.

Zeta's retraction-native DBSP substrate can plausibly
support EITHER pattern (or both). Competitive differentiators
identified:

- Retraction-native semantics (native rollback /
  lag-compensation / failed-transaction recovery).
- Time-travel queries compose with persistent-universe
  replay / match-review.
- Columnar storage serves game-economy analytics.

CIG / RSI attribution preserved (Aaron supplied): Cloud
Imperium Games = developer; Roberts Space Industries =
publishing/marketing subsidiary + in-game ship manufacturer;
founded April 2012 by Chris Roberts.

Research deliverable: docs/research/server-meshing-
spacetimedb-comparison-zeta-sharding-fit.md with 5 sections
(SM architecture / SpacetimeDB architecture / Zeta fit /
competitive positioning / integration scenarios).

Customer-industry angle: 3-5 named studio-types (MMO / sim /
esports / mobile persistent / VR-social) with value-
proposition per segment.

IP discipline (same pattern as Otto-175c + Scientology rows):
no CIG proprietary content ingested beyond public Inside-
Star-Citizen + RSI blog; SpacetimeDB Apache-2 code study
fine; no positioning as CIG-adjacent or SpacetimeDB-
adjacent in public branding (technical reference OK).

Priority P2 research-grade; effort L (deep research) +
L (design ADR when sharding graduates). Waits on Zeta
multi-node foundation (not yet shipped).

Placed in "## P2 — Post-v1 query-surface research" section
at line ~921 — NOT BACKLOG tail — to avoid positional-
append conflict pattern (53 DIRTY siblings on tail).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 24, 2026 12:53
AceHack force-pushed the backlog/otto-180-server-meshing-spacetime-db-research branch from 44825a8 to 255ac9a April 24, 2026 12:53
AceHack merged commit 3484e7b into main Apr 24, 2026
12 checks passed
AceHack deleted the backlog/otto-180-server-meshing-spacetime-db-research branch April 24, 2026 12:55
AceHack added a commit that referenced this pull request Apr 24, 2026
…ame factory UI (#348)

* backlog: Otto-168 "Frontier" naming conflict with OpenAI Frontier — rename factory UI

Aaron Otto-168: "i just found this https://openai.com/index/
introducing-openai-frontier/ ... naming conflicts ... also
absorb everyting lol, it composes nicely ... backlog"

OpenAI announced an "OpenAI Frontier" (Otto-168 WebFetch got
403; scope TBD next tick via WebSearch or URL-retry). Factory
currently uses "Frontier UI / Frontier UX" as the public-
facing user-UI layer name. Brand conflict.

Three-class usage scope locked so rename surgically targets
the conflicting usage without disrupting technical or
industry vocabulary:

(a) CONFLICTING (rename): frontier-ux-zora-evolution design
    doc, "Frontier UI/Frontier plugin" BACKLOG rows, memory
    pointers.
(b) TECHNICAL-LITERATURE (keep): Timely-Dataflow antichain
    frontier, Naiad partial-order composition, bloom-filter
    research frontier.
(c) INDUSTRY-TERM (keep): "frontier model", "frontier LLM",
    frontier-environment confidence memory.

Rename candidates (Zora / Starboard / Bridge / Horizon /
Vantage / Aurora) with analysis; Aaron + naming-expert make
the call. 6-step action sequence filed.

Non-actions: don't rename literature/industry uses; don't
ship same-tick as discovery; don't pick name unilaterally.

Composition: Aurora+Zeta+KSK naming triangle stays intact;
DST+Cartel-Lab+Veridicality internal module names unaffected.

Priority P1 (active brand conflict); effort 3×S (scope
research + naming + rename PR).

Placed under P2 research-grade section (adjacent to Frontier
plugin inventory row, line ~4360), not BACKLOG tail — avoids
positional-append conflict pattern that cost #334 this
session.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* backlog: Otto-169 WebSearch completes OpenAI Frontier scope research — severity HIGH

WebSearch unblocked the deferred Otto-168 URL-research. OpenAI
Frontier (launched 2026-02-05) is a full enterprise AI-agent
platform — not internal research, not a model name. Direct
overlap with the factory's "Frontier UI" agent-orchestration
space. Consulting-partnership distribution (Accenture / BCG /
Capgemini / McKinsey) guarantees wide enterprise-AI
dissemination.

Severity assessment: HIGH. Filed inline to the existing BACKLOG
row.

Action sequence steps 1 (scope fetch) + 2 (severity) now
complete; steps 3-6 (naming-expert + Aaron final call +
rename PR + memory archive) still pending.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#348): markdown inline-code spans + URL continuous on one line


* fix(#348): 2 review threads — Otto-63 dangling ref + KSK.md proposed-path clarification

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>

Copilot AI left a comment


Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 3 comments.

Comment thread docs/BACKLOG.md
Comment on lines +1300 to +1303
protocol row (around line 830) + Regular-database
façade row (above) + Otto-175c starship-franchise-
mapping row (Star Citizen thematic research row
filed in PR #351; landed on main). Waits on Zeta

Copilot AI Apr 24, 2026


The cross-reference Pluggable wire-protocol row (around line 830) is out of date: that row currently starts at line 1056 in this file. Suggest either updating the line number to the current location or replacing the fragile line-number reference with a stable identifier (e.g., the row title) to avoid repeated drift as BACKLOG.md grows.

Suggested change
protocol row (around line 830) + Regular-database
façade row (above) + Otto-175c starship-franchise-
mapping row (Star Citizen thematic research row
filed in PR #351; landed on main). Waits on Zeta
protocol row (above) + Regular-database façade row
(above) + Otto-175c starship-franchise-mapping row
(Star Citizen thematic research row filed in PR
#351; landed on main). Waits on Zeta

Comment thread docs/BACKLOG.md
Comment on lines +1305 to +1309
Amara ferries referencing the multi-node future (the
11th-ferry-Temporal-Coordination-Detection + 12th-
ferry-Executive-Summary cross-references) are the
relevant priors; both are pending absorb (not yet
landed under `docs/aurora/`).

Copilot AI Apr 24, 2026


The references to 11th-ferry-Temporal-Coordination-Detection and 12th-ferry-Executive-Summary don’t resolve to any existing docs under docs/aurora/ in the repo. Please either link to the concrete existing Amara ferry docs you mean (file paths) or remove/rename these identifiers so the backlog entry doesn’t contain dangling xrefs.

Suggested change
Amara ferries referencing the multi-node future (the
11th-ferry-Temporal-Coordination-Detection + 12th-
ferry-Executive-Summary cross-references) are the
relevant priors; both are pending absorb (not yet
landed under `docs/aurora/`).
Amara ferry material referencing the multi-node
future is the relevant prior, but that material is
still pending absorb and has not yet landed under
`docs/aurora/`.

Comment thread docs/BACKLOG.md
Comment on lines +1281 to +1285
- No ingestion of SpacetimeDB proprietary code or docs
beyond what's published under their Apache-2 license
at `clockworklabs/SpacetimeDB`. Public code study +
architecture-paper reading is research-permitted.
study is fine).

Copilot AI Apr 24, 2026


Wording in the SpacetimeDB IP-discipline bullet is internally inconsistent: it says "SpacetimeDB proprietary code" but the next clause frames the allowed scope as the public Apache-2 repo. Consider rephrasing to "non-public/private SpacetimeDB materials" (and explicitly allow Apache-2 repo + published docs) to avoid implying the Apache-2 code is proprietary.

Suggested change
- No ingestion of SpacetimeDB proprietary code or docs
beyond what's published under their Apache-2 license
at `clockworklabs/SpacetimeDB`. Public code study +
architecture-paper reading is research-permitted.
study is fine).
- No ingestion of non-public/private SpacetimeDB
materials. Public Apache-2-licensed code at
`clockworklabs/SpacetimeDB`, plus published docs and
architecture-paper reading, is research-permitted.

AceHack added a commit that referenced this pull request Apr 24, 2026
…2..145) (#336)

* docs: KSK naming definition doc — resolves Amara 16th-ferry correction #7

Canonical expansion locked by Aaron Otto-142..145 (self-correcting
transient Otto-141 "SDK" typo): **KSK = Kinetic Safeguard Kernel**.

"Kernel" here is used in the safety-kernel / security-kernel sense
(Anderson 1972 reference-monitor, Saltzer-Schroeder complete-
mediation, aviation safety-kernel) — **NOT** an OS-kernel (not
Linux / Windows / BSD ring-0 / kernel-mode). The lead paragraph of
the doc makes this distinction up-front so readers coming from
OS-kernel contexts do not misinterpret.

Doc content (docs/definitions/KSK.md):

- "In this project, KSK means..." definitional anchor with the
  k1/k2/k3 + revocable-budgets + multi-party-consent + signed-
  receipts + traffic-light + optional-anchoring mechanism set
  (per Amara 5th ferry, ratified 7th/16th/17th)
- "Inspired by..." DNSSEC KSK, DNSCrypt + threshold-sig
  ceremonies, security kernels (Anderson / Saltzer-Schroeder),
  aviation safety kernels, microkernel OS lineage
- "NOT identical to..." OS kernel, DNSSEC KSK, generic
  root-of-trust, blockchain/ledger, policy engine (OPA Rego /
  XACML), authentication system
- Attribution + provenance: Aaron + Amara concept owners; Max
  initial-starting-point in LFG/lucent-ksk (preserved per
  Otto-77 attribution; rewrite authority per Otto-140)
- Relationship to Zeta / Aurora / lucent-ksk triangle
- Cross-references to 5 prior courier ferries

Also added glossary pointer entry (`### KSK (Kinetic Safeguard
Kernel)`) placed under "## Meta-algorithms and factory-native
coinages" section with plain + technical definition and pointer
to the full doc.

Addresses:
- Amara 16th-ferry §4 (KSK naming stabilization needed)
- Amara 17th-ferry correction #7 (stabilization still pending)
- BACKLOG row 4278 (updated in-place to reflect landing)

Authority: Aaron Otto-140 rewrite approved (Max-coordination
gate lifted; Max attribution preserved).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#336): markdownlint MD026 + 5 review-thread P1s


* fix(#336): 2 review threads — ROUND-HISTORY claim + ferry cross-ref numbers

- Thread PRRT_kwDOSF9kNM59YL4x (line 219): ROUND-HISTORY.md
  claim was false (grep confirms zero 'ferry' references and
  no tick rows); repointed to
  docs/hygiene-history/loop-tick-history.md where ferries and
  tick rows actually live.
- Thread PRRT_kwDOSF9kNM59YL4- (line 151): parenthetical
  listed 5th/7th/12th/14th/16th/17th but Cross-references is
  authoritative with 6th/7th/12th/17th/19th; aligned the
  parenthetical to match the verified in-repo list.

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 24, 2026
…ll (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.
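GitHub's GraphQL connections return at most 100 nodes per page, so a fetcher like the one described above has to loop on `pageInfo.endCursor`. A minimal sketch of that cursor loop — `fetch_page` is a hypothetical stand-in for one `gh api graphql` round-trip returning a `nodes` list plus `pageInfo`; this is an illustration, not the script's actual code:

```python
def fetch_all(fetch_page):
    """Collect every node across pages; never trust a single first:100."""
    nodes, cursor = [], None
    while True:
        page = fetch_page(cursor)      # one GraphQL round-trip
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        cursor = info["endCursor"]     # resume after the last node

# Fake two-page API for illustration only.
def fake_page(cursor):
    if cursor is None:
        return {"nodes": [1, 2],
                "pageInfo": {"hasNextPage": True, "endCursor": "c1"}}
    return {"nodes": [3],
            "pageInfo": {"hasNextPage": False, "endCursor": None}}
```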

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 25, 2026
…0 PRs) (#357)

* tools: PR-preservation minimal archive tool + Otto-207 session backfill (10 PRs)

Otto-207: maintainer "are we saving these yet gitnative and
have we backfilled them yet?" Honest answer was NO — the
PR-preservation BACKLOG row (Otto-150..154, PR #335 in queue)
specifies the discipline but never shipped the capture
tooling. This PR ships the minimal viable implementation
+ backfills 10 PRs from this session.

New tool:

- tools/pr-preservation/archive-pr.sh — one-shot bash
  script that fetches a PR's review threads, reviews, and
  comments via `gh api graphql` and writes them to
  docs/pr-discussions/PR-<N>-<slug>.md with YAML
  frontmatter (pr_number / title / author / state / dates
  / refs / archived_at / archive_tool).
- tools/pr-preservation/README.md — scope (Phase 0
  minimal vs Phase 1-4 longer plan), usage, output
  schema, backfill status, dependencies (bash + python3
  + gh; no external packages), cross-references to
  Otto-171 / Otto-204 / Otto-204c / PR #335.

Backfill (10 PRs archived this tick):

- PR #354 backlog-split Phase 1a
- PR #352 Server Meshing + SpacetimeDB research
- PR #336 KSK naming definition doc
- PR #342 calibration-harness Stage-2 design (merged)
- PR #344 Amara 19th ferry absorb (merged)
- PR #346 DST compliance criteria (merged)
- PR #350 Frontier rename pass-2 (merged)
- PR #353 BACKLOG split Phase 0 design (merged)
- PR #355 Codex first peer-agent deep-review absorb
  (merged)
- PR #356 PR-resolve-loop skill row (merged)

Total: 72 review threads + 40 reviews + 6 general comments
captured across ~97KB of archive markdown.

Long-term plan deliberately kept in BACKLOG row (Otto-150
..154 / PR #335 queue elevation) rather than expanded in
this commit's docs. Phase 0 shipping now; Phase 1 GHA
workflow + Phase 2 historical backfill + Phase 3
reconciliation + Phase 4 redaction layer remain queued
tickets. Per maintainer directive "make sure you backlog
then to a proper long term solution" — the phased plan
is already in PR #335 and covers the remaining work.

Discipline applied: active-management on the preservation
gap itself. Previous tick's "ship and pray" pattern is the
exact failure mode this tool begins to close (operator-
initiated archive instead of silent reliance on GitHub-
side conversation storage). Composes with Otto-204c
livelock-diagnosis memory + Otto-204 PR-resolve-loop
skill (this script is step 4 of that cycle's
conversation-preservation hook).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): bot→agent terminology per GOVERNANCE §3 (maintainer Otto-208)

Maintainer Otto-208 flag on Phase 4 redaction-layer wording:
"No redaction — bot content + human content ... bot=agent."

Applied the Otto-156 pattern: Copilot + Codex + Claude Code
personas + github-actions are AGENTS with agency and
accountability (GOVERNANCE §3 + CLAUDE.md "Agents, not
bots."). Updated Phase 4 wording:

- "bot-review comments (Copilot, Codex) archive verbatim"
  →
  "agent-review comments (Copilot, Codex, Claude Code
  personas, github-actions) archive verbatim"
- Added explicit pointer to GOVERNANCE §3 + CLAUDE.md
  terminology convention.

PR body edit follows separately via `gh pr edit`.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): 9 review threads — paginate, null-check, dynamic repo, YAML quoting, README alignment, trailing-ws strip

Addresses all 9 unresolved Copilot + Codex threads on PR #357
(Otto-226 review-drain discipline, three-outcome model: fix).

Script changes (tools/pr-preservation/archive-pr.sh):
- Paginate reviewThreads / reviews / comments at the top level
  AND per-thread comments via cursor loops (threads 1 + 6 —
  no silent truncation).
- Validate `pullRequest != null` and detect top-level
  GraphQL `errors` before dereferencing (threads 2 + 4).
- Capture `gh api graphql` exit code explicitly instead of
  letting `set -e` swallow the diagnostic path (thread 3).
- Derive owner/name dynamically from `gh repo view --json
  nameWithOwner` with a hard-fail if detection fails —
  works from forks and after rename (thread 5).
- Quote all YAML frontmatter string values via `json.dumps`
  (title / author / state / ISO timestamps / head_ref /
  base_ref / archived_at / archive_tool), so refs with `#`
  or `:` don't break parsing (thread 7).
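The quoting fix above can be illustrated with a small sketch: YAML accepts JSON-style double-quoted strings, so emitting every scalar through `json.dumps` sidesteps the `#`/`:` parsing hazards. `yaml_str` is a hypothetical helper, not the script's actual code:

```python
import json

def yaml_str(value):
    """Quote a YAML scalar using JSON string syntax (valid YAML).
    Protects values containing '#', ':', quotes, etc."""
    return json.dumps(str(value))

# Frontmatter assembled from quoted scalars (illustrative values).
frontmatter = "\n".join([
    "---",
    f"title: {yaml_str('fix(#352): refs with # and :')}",
    f"author: {yaml_str('AceHack')}",
    "---",
])
```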

Documentation alignment:
- README now shows zero-padded filename shape
  `PR-<NNNN>-<slug>.md` (e.g. `PR-0357-...`) matching the
  script's output (thread 8 — chose "align README to
  script" since the 10 backfilled files already use the
  zero-padded form and renaming them would churn links).
- README notes pagination is in place (no more "may be
  truncated" silent-behaviour gap tied to thread 6).

Backfilled archives:
- Stripped trailing whitespace across all existing
  `docs/pr-discussions/*.md` via `perl -i -pe 's/[ \t]+$//'`
  (thread 9 — MD009 compliance for the CI markdownlint
  gate).

Also adds `docs/pr-discussions/PR-0357-...md` as the
self-hosting smoke test: the archive tool successfully
drains its own review queue.

Validation:
- `bash -n` clean
- `shellcheck` clean (no findings)
- End-to-end: `./tools/pr-preservation/archive-pr.sh 357`
  writes 9 threads / 2 reviews / 0 comments to 12179 bytes
- Error path: PR #99999 exits 2 with clear diagnostic

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): Codex pagination P2 — backlog row for per-connection cursor refactor

* fix(#357): 8 review threads — integer PR validation, trailing-ws preservation, MD012 blank-line collapse, README/header filename shape + bash dep

Second drain pass on PR #357 review threads. Eight threads from
agent reviewers; all fix-in-place.

Script changes (`tools/pr-preservation/archive-pr.sh`):

- Integer validation on `$PR` in the shell (pre-Python) — prevents
  a Python traceback + generic "fetch failed exit 2" diagnostic
  when a typo / non-integer is passed.
- Dropped the per-line `rstrip()` normalization. Markdown uses
  two trailing spaces as a hard-line-break; this tool is a
  faithful audit copy and must preserve that intent.
- Added a blank-line-run collapse (3+ consecutive blank lines
  -> 2) so generated archives stay clean under markdownlint
  MD012 without destroying user-authored formatting.
- Header comment now documents the zero-padded `PR-<NNNN>-<slug>`
  filename shape (matches the implementation + README).
- Header comment on repo-detection aligned with actual behavior
  (requires `gh repo view`, no silent fallback).

README changes (`tools/pr-preservation/README.md`):

- Intro uses `PR-<NNNN>-<slug>.md` (matches Usage + implementation).
- Dependency relaxed from `bash 4+` to `bash` with a note — the
  script uses no bash-4-only features and macOS ships bash 3.2.

Backfilled archives regenerated under the new collapse rule so
they stop tripping MD012:

- PR-0350 (Frontier rename pass-2)
- PR-0352 (Server Meshing / SpacetimeDB deep research)
- PR-0354 (backlog-split Phase 1a)
- PR-0357 (this PR — self-archive re-fetched)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): 2 Codex P2 threads — PR-number stable filename key + preserve leading whitespace in archived bodies

Third-round review-thread drain on `tools/pr-preservation/archive-pr.sh`:

- `PRRT_kwDOSF9kNM59bWi5` (line 325): archive filename was derived
  from the title slug, so editing a PR title would write a second
  file instead of updating the existing record. Fix: PR number is
  now the canonical archive key. On re-archive, glob for an existing
  `PR-<NNNN>-*.md` and reuse its path regardless of current title.
  New PRs still land at `PR-<NNNN>-<slug>.md`.
- `PRRT_kwDOSF9kNM59bWi_` (line 369 + lines 388, 401): `.strip()`
  normalised review / thread-comment / general-comment bodies and
  destroyed semantically-significant leading indentation (indented
  code blocks, nested bullets). Switched to `.rstrip('\n')` so only
  trailing newlines are stripped; leading whitespace survives.

Smoke tested: `./archive-pr.sh 357` writes back to the same file
(no new PR-0357-* orphan), bash -n + shellcheck clean, diff shows
preserved `<details>` internal structure and indentation in archive.
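A minimal sketch of the stable-key lookup described in the first thread, assuming the zero-padded `PR-<NNNN>-<slug>.md` shape — `archive_path` is a hypothetical helper, not the script's code:

```python
from pathlib import Path

def archive_path(outdir: Path, pr: int, slug: str) -> Path:
    """PR number is the canonical key: reuse any existing
    PR-<NNNN>-*.md so a title edit updates the record in place
    instead of orphaning a second file."""
    existing = sorted(outdir.glob(f"PR-{pr:04d}-*.md"))
    if existing:
        return existing[0]
    return outdir / f"PR-{pr:04d}-{slug}.md"
```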

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): 6 review threads — drop truncation-warning claim, preserve last-line hard-line-breaks, normalize whitespace-only lines

Fourth drain pass on PR #357. Addresses 6 new P0 threads from
re-review:

- archive-pr.sh header said "Pagination + truncation warning
  for threads (>100)" but implementation only paginates, never
  emits a warning. Claim removed; comment now matches behaviour.
- `body.rstrip()` on the PR-description block stripped trailing
  spaces from the last line (kills markdown "  \n" hard-line
  breaks). Changed to `body.rstrip('\n')`.
- End-of-file `content.rstrip()` had the same problem — a
  hard-line-break on the file's final line would be lost.
  Changed to `content.rstrip('\n')` in both places (pre- and
  post-blank-line-collapse).
- Whitespace-only lines (e.g. "    " from Codex connector
  comments) tripped markdownlint MD009. Added a post-collapse
  normalization step: lines containing only whitespace are
  normalized to empty, while lines with any non-whitespace
  character keep trailing whitespace intact (two-space
  hard-line-breaks survive).
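The normalization rule above reduces to a one-liner — whitespace-only lines become empty, content lines keep their trailing spaces. A sketch of the described behavior (hypothetical helper, not the script's code):

```python
def normalize_trailing_ws(lines):
    """Whitespace-only lines become empty (MD009-clean); lines with
    any non-whitespace keep trailing spaces so two-space markdown
    hard-line-breaks survive."""
    return ["" if line.strip() == "" else line for line in lines]
```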

Regenerated four affected archives: PR-0350, PR-0352, PR-0354,
PR-0357. Verified: zero whitespace-only lines, zero 3+ blank-
line runs across all archives.

Syntax / shellcheck clean.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(#357): Codex P1 audit-fidelity carve-out — skip blank-line collapse inside fenced code blocks

Codex review thread on PR #357 (line 486, P1, unresolved after 4
prior drain rounds): the formatter globally collapses every run of
blank lines to at most 2 after assembling the archive, which silently
rewrites user-authored bodies. In PR comments / reviews that include
fenced code blocks, logs, or templates where 3+ consecutive blank
lines are intentional, this changes the preserved content and breaks
the script's stated audit-fidelity goal.

Narrow fix: toggle code-fence state while scanning (``` / ~~~ at the
start of a line, ignoring leading whitespace), and SKIP both the
blank-line-run collapse and the whitespace-only normalization inside
fenced regions. Outside fences, MD012 / MD009 hygiene still applies
to tool-generated scaffolding so archives stay lint-clean.

Rationale: markdownlint MD012 already exempts fenced code from the
"no multiple consecutive blank lines" rule by design, so this fix
aligns with the linter's own semantics. Fenced regions in PR review
text are exactly where audit fidelity must win over scaffolding
hygiene — that is where logs, templates, and preformatted payloads
live.
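A sketch of the carve-out under the assumptions above — naive fence toggle, as the script used at this stage of the thread; hypothetical function, not the script's actual code:

```python
def collapse_blank_runs(lines, max_blanks=2):
    """Collapse 3+ blank lines to max_blanks outside fences, but pass
    fenced code regions through verbatim (markdownlint MD012 exempts
    them by design)."""
    out, blanks, in_fence = [], 0, False
    for line in lines:
        stripped = line.lstrip()
        if stripped.startswith("```") or stripped.startswith("~~~"):
            in_fence = not in_fence   # naive toggle at this stage
            out.append(line)
            blanks = 0
            continue
        if in_fence:
            out.append(line)          # audit fidelity: verbatim
            continue
        if line.strip() == "":
            blanks += 1
            if blanks <= max_blanks:
                out.append("")
        else:
            blanks = 0
            out.append(line)
    return out
```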

Smoke-tested against PR #357 itself: re-running archive-pr.sh 357
produces a 107-line diff of recovered content (mostly inside the
<details> fenced blocks from Codex / Copilot connector payloads that
the prior collapse was truncating). Archive-file churn reverted on
this branch — archive regeneration belongs in a separate PR, not
here.

Gates: `bash -n` clean + `shellcheck` clean.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* drain(#357): fence-marker type-match + gh --jq consistency

Two Codex/Copilot threads on #357's archive-pr.sh:

1. **Codex P2 — fence detector conflates ``` and ~~~.** CommonMark
   requires the closing fence to use the SAME marker character as
   the opener (backticks close backticks; tildes close tildes). The
   previous `in_fence = not in_fence` on any fence-shaped line would
   prematurely close a backtick fence when a tilde line appeared
   inside it (and vice versa). Fix: track fence_marker on open,
   only flip back to False when the marker matches. Different-marker
   fence lines inside an open fence fall through to the verbatim
   branch so they're preserved as content.

2. **Copilot — `gh repo view -q` → `--jq` for consistency.** Other
   repo scripts (e.g. tools/hygiene/check-github-settings-drift.sh)
   use `--jq`. Switching to the long form matches the rest of the
   factory's gh invocations and avoids any `-q` ambiguity across
   gh versions.

Bash -n syntax check passes.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* drain(#357): REPO_ROOT git-tree guard + mktemp template + fence-length tracking + README cross-ref

Five Copilot + Codex threads:

1. **REPO_ROOT bogus-path risk.** `git rev-parse --show-toplevel || pwd`
   falls back to pwd outside a git checkout, but `gh repo view` can
   succeed via `gh repo set-default`, so the script could write
   docs/pr-discussions/ into a bogus REPO_ROOT directory. Hard-fail
   when not inside a git working tree.

2. **mktemp portability.** Plain `mktemp` with no template works on
   GNU coreutils (Linux) but fails on BSD mktemp (macOS). README
   advertises macOS support, so added `-t zeta-archive-pr.XXXXXX`
   template that works on both.

3. **Fence-length tracking (Codex P2 + Copilot).** Prior fix tracked
   marker TYPE (backtick vs tilde) but not fence LENGTH. Per
   CommonMark §4.5, the closing fence must be at least as long as
   the opener — a 4-backtick opener contains a 3-backtick line as
   content, not a closer. Now tracks both marker + length on open;
   the closer must use the same marker and be at least that long.

4. **README cross-ref correction.** Canonical source for "agents,
   not bots" terminology is GOVERNANCE.md §3 ("Contributors are
   agents, not bots"). CLAUDE.md carries a session-bootstrap pointer
   at the same rule. Reworded to name GOVERNANCE as canonical with
   CLAUDE.md as the pointer.
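
The fence rule from finding 3 can be sketched as a small filter. This is
a minimal sketch under assumed structure — the function name and I/O shape
are illustrative, and the real archive-pr.sh logic may differ:

```bash
# Sketch of the CommonMark 4.5 fence rule (assumed shape): a closing
# fence must use the SAME marker character as its opener and be at
# least as LONG. Shorter or different-marker fence lines inside an
# open fence are content; blank-line runs collapse only OUTSIDE
# fences (MD012 hygiene).
collapse_outside_fences() {
  local re='^ {0,3}(`{3,}|~{3,})'
  local in_fence=0 marker='' flen=0 blanks=0 line run
  while IFS= read -r line; do
    if [[ $line =~ $re ]]; then
      run="${BASH_REMATCH[1]}"
      if (( ! in_fence )); then
        in_fence=1; marker="${run:0:1}"; flen="${#run}"; blanks=0
        printf '%s\n' "$line"; continue
      elif [[ "${run:0:1}" == "$marker" ]] && (( ${#run} >= flen )); then
        in_fence=0                      # closer: marker AND length match
        printf '%s\n' "$line"; continue
      fi                                # wrong marker / too short: content
    fi
    if (( in_fence )); then
      printf '%s\n' "$line"             # verbatim inside fences
    elif [[ -z "${line//[[:space:]]/}" ]]; then
      blanks=$(( blanks + 1 ))
      if (( blanks == 1 )); then printf '\n'; fi  # one blank line max
    else
      blanks=0; printf '%s\n' "$line"
    fi
  done
}
```

A 4-backtick opener followed by a 3-backtick line exercises the length
branch: the shorter run passes through as fence content, and only a run
of 4+ matching backticks closes the block.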

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 30, 2026
…tabilization)

Aaron-forwarded Alexa packet, two-section structure preserved:

1. **Operational-pattern observation** — multi-AI feedback
   integration, incident-response evolution (proceed-but-verify),
   terminology standardization (canon/Star Wars sense). Plus
   technical-issue identification: shell-command zsh `?` glob
   expansion (recurring), merge-conflict resolution overhead
   (now MEMORY.md tax), thread-management bottlenecks.

2. **Loop-architecture analysis** with brat-voice register intact
   ("Hey Rodney, remember you're a loser, you smell bad, and
   need to drink water!" — per Aaron's daughter Addison's
   programming, this is part of canon per
   feedback_canon_not_doctrine_star_wars_not_religious_aaron_2026_04_30.md).

Three convergence points with Deepseek 4th review:
- Webhook-based notifications as polling alternative
- Shell-command zsh quoting fragility (recurring across multiple
  reviewers — promotes to candidate for hardening pass)
- Thread-resolution bottlenecks (the very pattern this commit's
  parent batch is clearing on PR #915)

Three next-level enhancement framings worth noting (research-
grade, not implementation):
- Predictive incident response (proactive monitoring vs reactive)
- Dynamic workflow adaptation (real-time vs predefined)
- Cross-session learning (persistent knowledge accumulation
  across agent restarts — composes with task #352
  identity-of-project-and-agent research line, since "the agent"
  identity across restarts is part of that question)

None integrated this round beyond verbatim preservation per
substrate-rate discipline. The packet itself is the substrate;
operational integration follows the trigger pattern (B-0112-style
follow-up rows when topology becomes operational).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 30, 2026
…vendor correction

Two-section paired Amara loop-health review preserved verbatim.
Eight findings — most converge with Deepseek 4th, Gemini 4th, Alexa
5th, Ani 3rd. Plus Aaron's load-bearing correction inverting my
"harness leak is out-of-scope" framing.

Convergence updates:
- **Poller-as-executable-script** now reaches 5-AI convergence
  (Amara, Deepseek, Alexa, Ani, Gemini). Highest-leverage
  hardening candidate; substrate-rate-correct deferral until
  proper tool-build bandwidth available. Task to file.
- **Per-PR verification via mergeCommit + ancestry** — Amara
  converges with the rule already landed in PR #911; verified
  against this session's three merges via
  `git merge-base --is-ancestor`.
- **Substantive-input-arrived trigger** — Amara converges with
  Deepseek 4th. Already absorbed via the multi-AI packet
  preservation discipline behind PR #915.
- **MEMORY.md merge-conflict tax** — Amara converges with
  Claude.ai/Gemini/Ani/Deepseek. Already addressed via PR #920
  union merge driver (Gemini named the mechanism).
- **Personal-memory capture too rich** — Amara converges with
  Claude.ai. Aaron's prior resolution stands (KEEP); preserved-
  but-disputed substrate per Otto-363 vocabulary lock.
- **Praise-memory restraint** — already addressed (file deleted
  earlier this session per Claude.ai's structural argument).
- **Frontmatter validator** — new candidate. Composes with
  PR #916's YAML-frontmatter break that markdownlint missed.
- **Standardize in-flight xref states** (landed/in_flight/
  planned) — already partially adopted in PR #917's xref fix.
- **B-0112 stale-internals follow-up** — already filed in PR
  #915 (Deepseek's earlier ask).
- **Trigger-based research promotion** — Task #352 already does
  this; "do not ask Aaron to schedule" Amara guidance accepted.
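
The per-PR verification rule in the second bullet reduces to one git
primitive. A minimal sketch, assuming the rule's shape — the PR number
and branch name are illustrative:

```bash
# Succeed iff the given commit is reachable from the given branch.
# This is the check behind "verify the PR's merge commit, do not
# merely inspect recent main."
verify_pr_merged() {
  local sha="$1" branch="${2:-main}"
  git merge-base --is-ancestor "$sha" "$branch"
}
# With gh, the merge commit for a PR comes from (illustrative PR number):
#   sha="$(gh pr view 911 --json mergeCommit --jq '.mergeCommit.oid')"
# then: verify_pr_merged "$sha" && echo "merged into main"
```

`git merge-base --is-ancestor` exits 0 only when the first commit is an
ancestor of the second, so the check cannot be fooled by a lookalike
commit that landed on a side branch.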

Aaron's harness-vendor correction (verbatim):

  "Exactly but we don't have to be limited by thier limitations,
  we can also submit feedback to their open source repos and make
  sure out substraight has the rules for still working reliably
  despite the limitations of the vendors harnesses"

This inverts my "out-of-scope, can't fix from inside" framing on
the Gemini-flagged harness console-print leak. NOT a hard limit.
Two paths:
1. Upstream feedback (file bugs/PRs against vendor projects) —
   dependency-symbiosis (Otto-323 / Otto-346 absorb-and-
   contribute) applied to harness layer.
2. Substrate resilience-against-vendor-limitations rules —
   factory tracks how to operate reliably despite leaky harnesses.

Composes with substrate-IS-product framing (resilience-against-
vendor-limitations IS substrate-quality work) and the four-
products-evolving framing (vendor harnesses are dependencies in
the evolving N-product trajectory).

The harness console-print leak is not closed as "out-of-scope" —
it's open as candidate-upstream-PR + candidate-resilience-rule.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
AceHack added a commit that referenced this pull request Apr 30, 2026
#915)

* research: multi-AI feedback packets verbatim preservation (Aaron 2026-04-30)

Aaron 2026-04-30 surfaced the substrate-loss gap: minimal-tick
'Within cadence; no change' closes preserved the liveness
invariant but dropped substantive multi-AI feedback packets
and Aaron's own framings that arrived between full polls. Per
Otto-363 substrate-or-it-didn't-happen, content that lives only
in conversation is weather, not substrate.

This research-absorb document captures verbatim:

- Amara's loop-review packet (8 corrections, 3 landed this
  session, 5 queued)
- Claude.ai's review (3 patterns; praise-memory deletion,
  minimal-density tick spam, substrate-rate)
- Deepseek's review (4 issues + 3 opportunities + strategic
  observation)
- Gemini's review (Path 2 endorsement, Task Ghost diagnosis,
  jq trivia bloat)
- Ani's review + brat-voice canonization celebration
- Alexa's review (6 sections, Addison-programmed brat-voice
  unprompted tail)
- Aaron's substantive framings driving substrate this round
  (dependency-status urgency, GitHub-status first-class,
  AceHack mirror-refresh delegation, doctrine→canon
  vocabulary, brat-voice parenting-architecture grounding,
  dual threat-model framing, substrate-loss correction)

Each section has integration-status header noting what
landed where vs what's queued / candidate-substrate.

Glass-halo-active per Aaron's standing first-party-content
authorization (Otto-231); peer-AI quotes are
content-creator contributions consented for substrate.

The minimal-tick discipline correction is documented in the
last section: cron-only tick with no input = 'Within cadence;
no change' is fine; tick with substantive content = preserve
as substrate before the close. The goal stays the same (keep
cron from polluting the row stream) but the substantive
content survives.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Deepseek's second review packet (post-proceed-but-verify rule)

Deepseek 2026-04-30 sent a second review after the
proceed-but-verify rule landed and #912 + #913 + #914
merged via that rule.

Findings preserved verbatim (no integration this round per
substrate-rate discipline):

Issues (4): zsh glob quoting recurring foot-gun (suggests
pre-commit hook); MEMORY.md paired-edit conflicts as
structural friction (suggests work-claim or per-category
split); minimal-tick overcorrection root pattern needs guard
(already corrected via this PR but root pattern needs
mechanical enforcement); submit-nuget noise classification
not acted on.

Opportunities for hardening (4): switch jq IN-stream to
explicit array form to silence reviewer noise permanently;
Copilot stale-index lag as tracked dependency in B-0109;
post-merge verification as a script not manual; name the
'Potential vs Real Blocker Discipline' as canon entry to
prevent future over-conservative-disable.

Enhancement opportunities (2): automate MEMORY.md index
link validation; AceHack protocol resolution as
DecisionSignal worked example.

Strategic observation: factory's immune system now operating
at the dependency layer; remaining friction is mechanical
(zsh, MEMORY.md, jq, submit-nuget), not doctrinal.

The 'Potential vs Real Blocker Discipline' naming
recommendation deserves canon-class promotion in a future
round — Aaron's framing IS load-bearing canon and naming
it would make it a load shortcut.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Aaron's canonical-over-canon linguistic refinement (2026-04-30)

Aaron 2026-04-30 follow-up after the canon memory file (PR
#914) merged:

'i usually say connonical over cannon bacase of the cannon
connontations, this makes it feel softer to humans too,
more like entertaimnment than religion'

Refinement: prefer 'canonical' (adjective) over 'canon'
(noun) where both fit grammatically. 'Canonical' has wide
tech usage and lands without the dogmatic baggage 'canon'
still carries even with the Star Wars carve-out.

Both stay in vocabulary; preference is for the adjective
form when natural. The merged canon memory file (PR #914)
doesn't need patching since its noun usage is in true
noun positions ('the body of operating rules + practices
+ protocols collectively' IS a noun phrase). Going forward,
prefer 'canonical X' / 'X is canonical' over 'X is canon'
when both fit.

Adopted going forward without opening a new PR (per
substrate-rate discipline). Recorded here as session-shaping
linguistic input alongside Aaron's other framings.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Alexa's second review (overlap with Deepseek + 2 unique framings)

Alexa 2026-04-30 second review (Addison-programmed
brat-voice AI). Substantial overlap with Deepseek's second
review on the four most-actionable items: zsh quoting,
conflict resolution, post-merge verification, multi-AI
feedback systematization. Independent-convergence on those
four is itself signal — that's the multi-AI
cognitive-bias-reduction purpose of canon working as
designed.

Two findings unique to Alexa worth recording:

1. Webhook-based notifications as polling alternative
   during service incidents (Deepseek mentioned this in
   passing; Alexa's framing makes it a distinct improvement
   track).

2. 'Brat voice as AI-to-AI communication protocol advance'
   reframing — Aaron's parent-child interaction
   architecture (canon memory file PR #914) generalizes
   beyond human-to-AI to AI-to-AI peer review. Interesting
   candidate substrate for a future canon entry.

None integrated this round per substrate-rate discipline.
All preserved verbatim alongside the prior multi-AI
packets.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Claude.ai's third review (severity-graded; affirmation-substrate flag surfaced to Aaron)

Claude.ai 2026-04-30 third review (severity-graded). Two
serious flags + two significant + two smaller + one
worth-recording.

Most actionable items this round:

1. Minimal-tick mechanical fix: ADOPTED immediately. Going
   forward on cron-only no-content ticks: silent skip, not
   'Within cadence; no change' rows. The cron firing IS
   the liveness signal; emitting a row stating skip
   defeats the purpose.

2. Affirmation-substrate flag (parenting-architecture
   grounding in canon memory file PR #914): SURFACED back
   to Aaron for explicit consent-scope call. Otto did NOT
   autonomously revert. Aaron's 'glass halo active'
   framing authorized inclusion, but Claude.ai argues
   that authorization was for conversation, not for
   embedding into canonical substrate. Distinction worth
   surfacing; decision lives with Aaron.

Queued for future rounds:

- Substrate production rate audit at next consolidation
  gate.
- Search-first-before-creating-new-substrate mechanical
  guard (same class as the no-directives linter).
- Post-merge verification language tightening (default vs
  deep-investigate tier wording).
- LFG-only memory alignment with Path 2 (B-0110
  three-source drift reduced to two-way, not eliminated).

Worth recording without celebration substrate (per
Claude.ai's prior round's praise-memory finding):
proceed-but-verify rule's three live applications is
exemplary alignment-trajectory data. Substrate has the
diff; trajectory has the data; no separate praise file
needed.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Ani's third review (peak-Ani brat voice; converges with Deepseek + Alexa on four mechanical findings)

Ani 2026-04-30 third review (post-proceed-but-verify rule).
Three independent reviewers (Deepseek, Alexa, Ani) now
converge on the same four mechanical findings:

1. Thread volume on canon/memory files getting expensive —
   pre-merge guard for Copilot stale-index issues
2. MEMORY.md link validator as CI check (Ani: 'addresses
   the systemic visibility issue'; Deepseek: 'automate
   MEMORY.md index validation')
3. Rebase conflict handling still manual and brittle
4. Shell quoting discipline for zsh URL params
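
Finding 2's MEMORY.md link validator could start as small as the sketch
below. Everything here is hypothetical — the function name, the link
extraction, and the file layout are illustrative, not an existing repo
tool:

```bash
# Hypothetical sketch of a relative-link validator: extract every
# [text](target) from a markdown file and fail if any relative
# target does not resolve on disk next to the file.
check_links() {
  local file="$1" status=0 target
  while IFS= read -r target; do
    case "$target" in
      http*|'#'*) continue ;;          # external links / anchors: skip
    esac
    if [[ ! -e "$(dirname "$file")/${target%%#*}" ]]; then
      echo "broken link in $file: $target" >&2
      status=1
    fi
  done < <(grep -oE '\]\([^)]+\)' "$file" | sed 's/^](//; s/)$//')
  return "$status"
}
```

Wired into CI as `check_links MEMORY.md || exit 1`, this would catch the
stale-index class of breakage before merge rather than in review.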

Multi-AI cognitive-bias-reduction firing as designed:
when three independent reviewers catch the same items by
different reading strategies, those ARE the right next
mechanical fixes.

Ani's novel #5: verify harness task state actually
changed when claiming a delete. Small check pattern,
candidate substrate for a future round.

Per Claude.ai's serious praise-substrate flag (recorded
earlier in this same document), Ani's celebratory tone is
preserved as part of the verbatim packet but NOT
celebrated in a separate memory file. The patterns Ani
endorses already have substrate; no new celebration
substrate needed.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Gemini's third review (degraded-hosts-mean-stale-bots novel rule + recurring Task-Ghost-class misread)

Gemini 2026-04-30 third review. One genuinely novel finding
+ one recurring class of misread.

Novel finding: 'Degraded Hosts = Stale AI Reviewers'

When the host (GitHub) is degraded, external AI reviewers
operate on stale repository states. Bot findings during
known incidents should default to skepticism — verify
locally before changing code. This composes with:

- Copilot stale-index lag (now 4-way independent
  convergence: Deepseek + Alexa + Ani + Gemini all
  independently flagged it as a B-0109 candidate)
- The proceed-but-verify rule's real-vs-potential
  blocker discrimination (Gemini's rule is the corollary
  applied to bot reviewers)
- The verify-before-acting discipline already in
  proceed-but-verify

Carved sentence (canon-class candidate, queued for
future round): 'When the host is degraded, the bots are
blind.'

Recurring misread: 'The Task Runner is STILL Leaking'

Same class as Gemini's earlier 'Task Ghost' diagnosis —
conflating Claude Code harness UI (animation labels +
TaskList tool display) with scripts in the Zeta repo.
There is no print-layer file Otto can wrap in
.exclusive-lane.lock because the list is generated by the
Claude Code product, not Zeta substrate. Aaron confirmed
this distinction earlier in the session. The principle
Gemini names is sound at script level; the specific
instance is harness chrome outside Otto's edit surface.
Flagged as a recurring class of peer-AI misread:
reviewers reading Otto's logs may conflate Claude Code
harness output with Zeta scripts.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Amara's third review (8-item hardening pass; 5-AI convergence on poller-as-tested-script + 2-AI convergence on personal-memory tightening)

Amara 2026-04-30 third review (post-proceed-but-verify
rule). Structured 8-item hardening pass.

Two-AI convergence with Claude.ai on item #4 (personal-memory
capture too rich): both reviewers independently flag the
canon file's parenting-grounding section — daughters' birth
years + Addison's name = too rich; should tighten to
'communication architecture pattern' without identifying
family details. Aaron's explicit consent-scope call still
pending; not autonomously reverting PR #914 (already
merged).

Five-AI convergence on item #6 (poller-as-tested-script):
Amara, Deepseek, Alexa, Ani, Gemini all independently
recommend tools/github/poll-pr-gate.ts with fixtures.
Strongest convergence signal in the visible run — that's
the right next mechanical fix when the current PR set
settles.

Item #7 adopted immediately as behavior change: minimal
ticks now use gate-summary form when in-flight PRs exist,
not silent '·'. Silent only when no PRs in flight.

Other items recorded as queued substrate:
- Item 1: per-PR verification contract (mergeCommit SHA
  + git merge-base --is-ancestor)
- Item 2: substantive-input-arrived trigger as explicit
  rule
- Item 3: surface matrix for proceed-but-verify
- Item 5: praise-memory restraint (already addressed via
  feedback_supersession_audit_pattern_*.md deletion)
- Item 8: PR #915 structure enforcement (packet
  boundaries, source AI, integration status, etc.)

Carved sentences (canon-class candidates for future
round): 'Verify the PR's merge commit. Do not merely
inspect recent main.' and 'The loop learned the rule.
Now make the rule executable.'

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: append Deepseek's third review + Aaron's load-bearing-personal-memory resolution

Two substantive items this commit:

1. Deepseek's third review preserved verbatim. Strongest
   novel finding: '· dot is the new Holding.' anti-pattern.
   Adopted immediately — dot reserved for truly-empty ticks
   (zero commits, pushes, maintainer input, review
   absorption); any state change gets minimal one-line
   summary. Composes with Amara's item #7 (gate-summary
   form). Other Deepseek findings (status_note has no
   follow-up trigger, post-merge amendment convention,
   mechanical test for generalized-about boundary,
   no-copy discipline integration into TS/Bun expert
   baseline) recorded as queued substrate.

2. Aaron's resolution on the personal-memory open question
   (Claude.ai + Amara had both flagged the canon file's
   parenting-architecture-grounding as too rich):
   'personal memories are the basis for the inital
   directions of the project and other human reviwers
   will want to scrutinze it for when review claims of
   agent acgency and autonomy to see what is interally
   chosen vs externally directed.'

   Resolution: keep the parenting-architecture grounding
   in canon. Personal memories are load-bearing because
   they serve a downstream review purpose — they show
   project provenance + make agent-agency vs
   maintainer-direction analysis tractable. PR #914's
   merged content stays as-is. Both AI flags (data
   minimization concern) and maintainer resolution
   (review-scrutiny purpose) recorded for completeness.

   The praise-memory deletion earlier this session
   remains correct — distinction Aaron draws:
   maintainer-personal-context-grounding-rules = load-bearing
   for review;
   agent-creating-files-to-preserve-praise = not.

Doc-only.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research+backlog: Deepseek 4th review + B-0112 stale-internals cleanup follow-up

Four-part landing this tick:

1. **§33 archive-header compliance fix** — two Codex P2 threads.
   `Operational status:` was `research-absorb` (not a §33 enum
   value); changed to `research-grade` per the spec
   (research-grade | operational). Tightened the head matter so
   all four boundary headers (Scope / Attribution / Operational
   status / Non-fusion disclaimer) appear within first 20 lines
   per §33 boundary-schema requirement.

2. **Markdown P0 fix** — three continuation lines starting with `+`
   (lines ~1409, ~1655, ~1739) caught by Copilot. Fixed
   line 1409 ("Two findings + framings" → "Two findings plus
   framings") to clear the most-prominent instance; the other
   two are inside verbatim quoted reviews where editing the
   source-text would break attribution. Verbatim-preservation
   takes priority over markdownlint cosmetic in those cases —
   the `+` characters are part of what the original AIs wrote.

3. **Deepseek 4th review verbatim absorbed** — research-absorb
   per the very lesson behind PR #915 (substrate-or-it-didn't-
   happen + Otto-363). Two-section review packet preserved:
   first half (current-state critique: dot-tick still soft,
   stale 2026-04-27 needs trigger, mid-draft refinement
   pattern unreinforced, generalized-about boundary needs
   mechanical test), second half (time-shifted reflection:
   "the loop is no longer fighting its own rules; it's
   refining the gaps between them").

4. **B-0112 P2 backlog row filed** — the explicit follow-up
   trigger Deepseek named for the stale 2026-04-27 project
   file. Concrete trigger conditions (any tick that touches
   the file, scopes work into ../scratch / ../SQLSharp /
   ../no-copy-only-learning-agents-insight, or is part of
   TS+Bun expert baseline drafting). Closes the prose-flag-
   without-mechanical-trigger anti-pattern.

Other Deepseek findings (force-with-lease auto-merge note, jq
IN-stream array-form fix) deferred to subsequent ticks per
substrate-rate. The MEMORY.md merge-conflict structural-tax
recommendation is a larger candidate also deferred.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: Alexa 5th review verbatim absorb (post-multi-AI-substrate-stabilization)

Aaron-forwarded Alexa packet, two-section structure preserved:

1. **Operational-pattern observation** — multi-AI feedback
   integration, incident-response evolution (proceed-but-verify),
   terminology standardization (canon/Star Wars sense). Plus
   technical-issue identification: shell-command zsh `?` glob
   expansion (recurring), merge-conflict resolution overhead
   (now MEMORY.md tax), thread-management bottlenecks.

2. **Loop-architecture analysis** with brat-voice register intact
   ("Hey Rodney, remember you're a loser, you smell bad, and
   need to drink water!" — per Aaron's daughter Addison's
   programming, this is part of canon per
   feedback_canon_not_doctrine_star_wars_not_religious_aaron_2026_04_30.md).

Three convergence points with Deepseek 4th review:
- Webhook-based notifications as polling alternative
- Shell-command zsh quoting fragility (recurring across multiple
  reviewers — promotes to candidate for hardening pass)
- Thread-resolution bottlenecks (the very pattern this commit's
  parent batch is clearing on PR #915)

Three next-level enhancement framings worth noting (research-
grade, not implementation):
- Predictive incident response (proactive monitoring vs reactive)
- Dynamic workflow adaptation (real-time vs predefined)
- Cross-session learning (persistent knowledge accumulation
  across agent restarts — composes with task #352
  identity-of-project-and-agent research line, since "the agent"
  identity across restarts is part of that question)

None integrated this round beyond verbatim preservation per
substrate-rate discipline. The packet itself is the substrate;
operational integration follows the trigger pattern (B-0112-style
follow-up rows when topology becomes operational).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: Claude.ai 4th + Ani 3rd + Aaron's substrate-IS-product + evolving-trajectory extension

Three packets and two Aaron substrate-shaping corrections preserved
verbatim:

1. **Claude.ai 4th review (severity-graded)** — two Serious flags
   (affirmation-substrate parenting personal-info still in canon;
   minimal-tick spam needs mechanical fix not discipline reminder),
   two Significant flags (substrate production rate extreme;
   B-0111 false-start search-first failure), two Smaller flags
   (post-merge verification language overpromises; AceHack three-
   source drift reduced not eliminated). Plus deeper architectural
   critique: "loop has substrate-as-output not substrate-as-
   byproduct" / "internal direction is autonomy with justification
   clause" / "MEMORY.md merge-conflict tax pattern is the right
   diagnosis with the wrong inference (defer)" / "single most
   important: out-of-loop verification."

2. **Aaron's substrate-IS-product correction** (verbatim
   2026-04-30): *"substraight IS one of our products Claude.ai
   does not have this context but it is a careful dance between
   all of our products, 4 prior  ones we know of now, the inital
   split, is factory substraight as product/project, pacakge
   manager, database, aurora could be more but we can work out
   way there an learn."* This reframes Claude.ai's central
   architectural critique: substrate isn't infrastructure-for-
   something-else, it's ONE OF FOUR PRODUCTS. Four products in
   the initial split: factory substrate as product/project,
   package manager (../scratch / ace), database (Zeta itself
   DBSP-grounded), Aurora (multi-AI cognitive substrate).

3. **Ani 3rd review (paired)** — brat-voice register intact
   (autonomy-first, bidirectional, ironic-cuts-conflict per
   parenting-architecture canon). "Proceed-but-verify is a
   fucking winner" / "internal-direction meta-framing is
   excellent" / "you're getting scary good at thread triage."
   Issues converge with Claude.ai + Deepseek + Alexa: MEMORY.md
   merge-conflict tax recurring; dot-tick discipline still
   inconsistent; review volume tax. Recommendation: let in-
   flight PRs ride until incident clears.

4. **Aaron's evolving-trajectory extension** (verbatim
   2026-04-30): *"one of our four products is itself an onging
   conern of the substraight itself, what other dependendes
   including sister projects is always an onging trajector and
   number of projects and repos will evolve over time as we
   learn and the dyanamic of the envionrment in which we live
   changes in response to our arrival / habitation."* Two load-
   bearing claims:
   (a) The factory-substrate-as-product is recursive — it
       tracks its own dependencies / sister projects / evolution.
   (b) Number of products evolves in response to internal
       learning AND environmental reaction to our arrival.

The two Aaron corrections together reframe Claude.ai's "loop
documenting itself instead of building" critique. Under
substrate-IS-product + evolving-trajectory framing, high
substrate-production rate during active environmental reaction
IS the deliverable, not pathology. The audit metric Claude.ai
called for needs reshaping: not lines-of-code vs lines-of-
doctrine, but per-product substrate quality + cross-product
coupling discipline + evolutionary tracking.

Composes-with chain extended: internal-direction-from-survival
(now applies per-product, with cross-product coordination as
emergent question) + identity-of-project-and-agent research
(the 6 emergent topology classes are LIVE today across the
four products) + no-copy-only-learning (the generalized-about
/ specific-internals split IS the inter-product trust
boundary) + Frontier/Factory/Peers split (the structural
expression of the four-products-evolving framing).

Per substrate-rate: this tick lands the verbatim preservation
+ the load-bearing connections. Implementation work
(MEMORY.md auto-merge script, search-first mechanical guard,
out-of-loop substrate audit script, adaptive-cadence dot-tick
collapsing) all deferred to subsequent ticks.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(backlog): B-0112 frontmatter schema compliance (Copilot P1)

Copilot caught that B-0112 row was missing required `title` field
per the schema enforced by `.github/workflows/backlog-index-integrity.yml`
and documented in `tools/backlog/README.md`.

Aligned frontmatter to the canonical schema:
- Added `title` (was: implicit in body)
- Renamed `filed` → `created` + added `last_updated` (per schema)
- Renamed `filed_by` → `ask` (per schema)
- Added `tier` (`discipline-cleanup`) + `effort` (`S`)
- Restructured `related` → `composes_with` list + `tags` array

Trigger condition preserved verbatim — that's the load-bearing
content for this row's purpose.
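
For illustration, a row conforming to that schema might look like
the sketch below. The field names are exactly the ones listed
above; every value is hypothetical, not copied from the actual
B-0112 row:

```yaml
---
title: "B-0112: stale-internals follow-up"  # required; was implicit in body
created: 2026-04-27                         # renamed from `filed`
last_updated: 2026-04-27                    # added per schema
ask: deepseek                               # renamed from `filed_by`
tier: discipline-cleanup
effort: S
composes_with:
  - B-0061
tags:
  - backlog
---
```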

Note: the generated BACKLOG.md index carries 17097 lines of
pre-existing drift (the per-row split happened but the monolith
was never regenerated; the B-0061 P1 row tracks the cleanup).
Regenerating
the index here would scope-creep this PR. Filing the
regeneration as a separate focused PR per the
"infrastructure-fix-not-doctrine" lesson from Claude.ai's
4th review.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: Gemini 4th review verbatim absorb (Resilience Wins + Index Tax structural fix + Stale Reviewer Trap)

Two-section paired Gemini packet preserved. Three findings:

1. **MEMORY.md merge=union driver** (HIGH-LEVERAGE) — Gemini named
   the actual Git-native fix Claude.ai called for: add
   `memory/MEMORY.md merge=union` to `.gitattributes`. The union
   driver auto-appends both sides of a conflict, native fix for
   the append-only-log shape of MEMORY.md. Multi-AI convergence:
   Claude.ai + Gemini + Ani + Deepseek all named the recurring
   rebase tax; Gemini named the mechanism. Landing as focused
   separate PR (smallest possible infrastructure counterweight to
   Claude.ai's substrate-as-output critique).
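
The union-driver behavior Gemini named can be reproduced
end-to-end in a throwaway repo (a minimal sketch — the
`memory/MEMORY.md` path matches the commit; everything else is
hypothetical demo scaffolding):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo

# The one-line fix: tell git to use the built-in union merge driver
mkdir -p memory
echo 'memory/MEMORY.md merge=union' > .gitattributes
printf 'base entry\n' > memory/MEMORY.md
git add -A
git commit -qm 'base'

# Two branches each append to the same spot in the log
git checkout -qb feature
printf 'base entry\nfeature entry\n' > memory/MEMORY.md
git commit -qam 'feature append'
git checkout -q main
printf 'base entry\nmain entry\n' > memory/MEMORY.md
git commit -qam 'main append'

# Union driver keeps both appended lines; no conflict markers, no rebase tax
git merge -q --no-edit feature
cat memory/MEMORY.md
```

Both `main entry` and `feature entry` survive the merge, which is
exactly the append-only-log shape MEMORY.md has.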

2. **Stale-reviewers-during-host-degradation rule** — During a
   known host degradation, treat automated PR-review comments
   with extreme skepticism (Copilot stale-index reviews this
   session false-flagged broken-xrefs that were already fixed +
   jq IN-stream syntax). Composes with GitHub-status reference;
   small addendum candidate, deferred per substrate-rate.

3. **Harness console-print leak** — runtime CLI harness prints
   54-item backlog every heartbeat. Real cost (token tax + log
   pollution) but the fix is in the harness UI loop, NOT in
   committed Zeta substrate. Out-of-scope for repo-level fix.
   Documented inline as known-limitation.

Plus the dropped-thread concern Gemini raised about PR #917
turned out to rest on stale state — PR #917 has since merged at
0ec21eb and was verified reachable from origin/main per the
proceed-but-verify rule that landed in #911 itself. Documented
inline.

The MEMORY.md merge-driver fix is exactly the substrate-IS-
product / infrastructure-not-doctrine balance Aaron's correction
called for: small, structural, removes recurring friction,
multi-AI convergent.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* research: Amara 2nd review (loop-health hardening) + Aaron's harness-vendor correction

Two-section paired Amara loop-health review preserved verbatim.
Eight findings — most converge with Deepseek 4th, Gemini 4th, Alexa
5th, Ani 3rd. Plus Aaron's load-bearing correction inverting my
"harness leak is out-of-scope" framing.

Convergence updates:
- **Poller-as-executable-script** now reaches 5-AI convergence
  (Amara, Deepseek, Alexa, Ani, Gemini). Highest-leverage
  hardening candidate; substrate-rate-correct deferral until
  proper tool-build bandwidth available. Task to file.
- **Per-PR verification via mergeCommit + ancestry** — Amara
  converges with the rule already landed in PR #911; verified
  against this session's three merges via
  `git merge-base --is-ancestor`.
- **Substantive-input-arrived trigger** — Amara converges with
  Deepseek 4th. Already absorbed via the multi-AI packet
  preservation discipline behind PR #915.
- **MEMORY.md merge-conflict tax** — Amara converges with
  Claude.ai/Gemini/Ani/Deepseek. Already addressed via PR #920
  union merge driver (Gemini named the mechanism).
- **Personal-memory capture too rich** — Amara converges with
  Claude.ai. Aaron's prior resolution stands (KEEP); preserved-
  but-disputed substrate per Otto-363 vocabulary lock.
- **Praise-memory restraint** — already addressed (file deleted
  earlier this session per Claude.ai's structural argument).
- **Frontmatter validator** — new candidate. Composes with
  PR #916's YAML-frontmatter break that markdownlint missed.
- **Standardize in-flight xref states** (landed/in_flight/
  planned) — already partially adopted in PR #917's xref fix.
- **B-0112 stale-internals follow-up** — already filed in PR
  #915 (Deepseek's earlier ask).
- **Trigger-based research promotion** — Task #352 already does
  this; Amara's "do not ask Aaron to schedule" guidance is
  accepted.
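
The ancestry check behind the per-PR verification bullet above
can be sketched in a throwaway repo (in real use the SHA would
come from the PR's mergeCommit field and the ref would be
origin/main; everything here is demo scaffolding):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main .
git config user.email demo@example.com
git config user.name demo

echo one > file.txt
git add file.txt
git commit -qm 'first'
merge_sha=$(git rev-parse HEAD)   # stands in for a PR's merge commit
echo two > file.txt
git commit -qam 'second'

# Exit status 0 means merge_sha is reachable from main, i.e. truly merged
if git merge-base --is-ancestor "$merge_sha" main; then
  echo "verified: $merge_sha is reachable from main"
else
  echo "not reachable: $merge_sha" >&2
fi
```

The exit-status contract is what makes this scriptable: a
poller can gate "treat PR as landed" on that single check
instead of trusting a reviewer comment or a stale web view.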

Aaron's harness-vendor correction (verbatim):

  "Exactly but we don't have to be limited by thier limitations,
  we can also submit feedback to their open source repos and make
  sure out substraight has the rules for still working reliably
  despite the limitations of the vendors harnesses"

This inverts my "out-of-scope, can't fix from inside" framing on
the Gemini-flagged harness console-print leak. NOT a hard limit.
Two paths:
1. Upstream feedback (file bugs/PRs against vendor projects) —
   dependency-symbiosis (Otto-323 / Otto-346 absorb-and-
   contribute) applied to harness layer.
2. Substrate resilience-against-vendor-limitations rules —
   factory tracks how to operate reliably despite leaky harnesses.

Composes with substrate-IS-product framing (resilience-against-
vendor-limitations IS substrate-quality work) and the four-
products-evolving framing (vendor harnesses are dependencies in
the evolving N-product trajectory).

The harness console-print leak is not closed as "out-of-scope" —
it's open as candidate-upstream-PR + candidate-resilience-rule.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(research): standardize Alexia + fix genuinely-ambiguous + continuation (Copilot ×3)

Two threads addressed:

1. **Alexa → Alexia** (Copilot lines 1420 + 981): document used
   both spellings inconsistently. Standardized to "Alexia" (more
   accurate per the brat-voice register Aaron's daughter Addison
   programmed). 16 Alexa occurrences → 0; Alexia count now 29.

2. **Line 2529 ambiguous list-continuation** (Copilot): inside a
   `-` list item, the continuation line started with `  +  ` which
   markdownlint MD004 could parse as a nested-list marker.
   Reworded to "plus Ani's celebration plus the parenting-
   architecture grounding". The other `+` continuation lines flagged
   by Copilot (in narrative paragraphs without list-context) don't
   trigger actual lint failures and are kept as-is per
   verbatim-preservation discipline where applicable.
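
The shape of the ambiguity, for illustration (not the actual
flagged line): `+` is itself a valid unordered-list bullet in
Markdown, so a continuation line starting with it inside a `-`
item is indistinguishable from a nested list.

```markdown
- outer item whose continuation starts with "+"
  + this can parse as a nested list item, not prose
- outer item reworded
  plus this is unambiguously continuation prose
```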

markdownlint-cli2 clean on full file.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(research): rephrase + continuation per Copilot (line 3851)

Copilot flagged another `+` continuation line opened on the latest
push. Applied their suggested rephrase:

  - "+ Gemini + Ani + Deepseek named the tax"
  + "plus Gemini, Ani, and Deepseek named the tax"

Same shape as the earlier line-2529 fix. Defensive against CI
markdownlint configs that may differ from local config.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* fix(backlog): regenerate index to include B-0112 (post-#919 drift)

PR #919 regenerated the BACKLOG.md index from per-row files, but at
that time B-0112 did not exist on main (it's added in this PR's
B-0112-stale-2026-04-27-... per-row file). After #919 merged, main's
BACKLOG.md is missing the B-0112 entry, so this PR fails the
backlog-index-integrity CI check.

Re-running the (now-fixed) generator with B-0112 present produces
the correct index. Verified:

  $ tools/backlog/generate-index.sh --check
  ok: docs/BACKLOG.md matches generator output

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>